54 research outputs found

    Constructing Parsimonious Analytic Models for Dynamic Systems via Symbolic Regression

    Developing mathematical models of dynamic systems is central to many disciplines of engineering and science. Models facilitate simulations, analysis of the system's behavior, decision making, and the design of automatic control algorithms. Even inherently model-free control techniques such as reinforcement learning (RL) have been shown to benefit from the use of models, typically learned online. Any model construction method must address the tradeoff between the accuracy of the model and its complexity, a tradeoff that is difficult to strike. In this paper, we propose to employ symbolic regression (SR) to construct parsimonious process models described by analytic equations. We have equipped our method with two different state-of-the-art SR algorithms which automatically search for equations that fit the measured data: Single Node Genetic Programming (SNGP) and Multi-Gene Genetic Programming (MGGP). In addition to the standard problem formulation in the state-space domain, we show how the method can also be applied to input-output models of the NARX (nonlinear autoregressive with exogenous input) type. We present the approach on three simulated examples with up to 14-dimensional state space: an inverted pendulum, a mobile robot, and a bipedal walking robot. A comparison with deep neural networks and local linear regression shows that SR in most cases outperforms these commonly used alternative methods. We demonstrate on a real pendulum system that the analytic model found enables an RL controller to successfully perform the swing-up task, based on a model constructed from only 100 data samples.
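    The core idea of symbolic regression can be illustrated with a deliberately simplified sketch: search a library of candidate analytic forms, fit each one's coefficient by least squares, and keep the most accurate expression. This toy brute-force search stands in for the SNGP/MGGP evolutionary search described in the abstract; the data and candidate library below are assumptions for illustration only.

```python
import math

# Toy data generated by a known law, y = 2*sin(x), playing the role of
# measured samples from a dynamic system.
data = [(x, 2.0 * math.sin(x)) for x in [i * 0.1 for i in range(50)]]

# A tiny library of candidate analytic forms y = c*f(x); a real SR method
# searches a vastly larger, composable expression space.
candidates = {
    "c*x": lambda x: x,
    "c*x**2": lambda x: x * x,
    "c*sin(x)": lambda x: math.sin(x),
    "c*exp(x)": lambda x: math.exp(x),
}

def fit(basis):
    # Closed-form least squares for a single coefficient c in y = c*f(x).
    num = sum(basis(x) * y for x, y in data)
    den = sum(basis(x) ** 2 for x, y in data)
    c = num / den
    sse = sum((y - c * basis(x)) ** 2 for x, y in data)
    return c, sse

best = min(candidates, key=lambda name: fit(candidates[name])[1])
c, sse = fit(candidates[best])
print(best, round(c, 3))
```

    The search correctly recovers the sinusoidal structure and its coefficient; the parsimony aspect of the paper corresponds to preferring the simplest expression among those with comparable error.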

    Toward Physically Plausible Data-Driven Models: A Novel Neural Network Approach to Symbolic Regression

    Many real-world systems can be described by mathematical models that are human-comprehensible, easy to analyze, and help explain the system's behavior. Symbolic regression is a method that can automatically generate such models from data. Historically, symbolic regression has been predominantly realized by genetic programming, a method that evolves populations of candidate solutions that are subsequently modified by the genetic operators of crossover and mutation. However, this approach suffers from several deficiencies: it does not scale well with the number of variables and samples in the training data, models tend to grow in size and complexity without an adequate accuracy gain, and it is hard to fine-tune the model coefficients using just genetic operators. Recently, neural networks have been applied to learn the whole analytic model, i.e., its structure and the coefficients, using gradient-based optimization algorithms. This paper proposes a novel neural network-based symbolic regression method that constructs physically plausible models based on even very small training data sets and prior knowledge about the system. The method employs an adaptive weighting scheme to effectively deal with multiple loss function terms and an epoch-wise learning process to reduce the chance of getting stuck in poor local optima. Furthermore, we propose a parameter-free method for choosing the model with the best interpolation and extrapolation performance out of all the models generated throughout the whole learning process. We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system. The results clearly show the potential of the method to find parsimonious models that comply with the prior knowledge provided.
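    The multi-term loss balancing can be sketched in miniature. The scheme below (an assumed formulation inspired by the abstract, not the paper's exact algorithm) weights a data-fit term and a prior-knowledge term by the inverse of their initial magnitudes, so both contribute comparably to the gradient step; the toy model, data, and prior are hypothetical.

```python
# Fit y = a*x to tiny data while softly enforcing prior knowledge a >= 1.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2*x

def data_loss(a):
    return sum((a * x - y) ** 2 for x, y in data) / len(data)

def prior_loss(a):
    return max(0.0, 1.0 - a) ** 2  # penalize violations of the prior a >= 1

a = 0.0
# Weight each term by the inverse of its initial magnitude so that neither
# dominates the combined objective purely due to scale.
w = [1.0 / (data_loss(a) + 1e-8), 1.0 / (prior_loss(a) + 1e-8)]

def total(v):
    return w[0] * data_loss(v) + w[1] * prior_loss(v)

lr = 0.01
for _ in range(2000):
    grad = (total(a + 1e-5) - total(a - 1e-5)) / 2e-5  # finite differences
    a -= lr * grad
print(round(a, 2))
```

    Once the slope exceeds 1 the prior term vanishes and the data term alone drives the fit toward the true coefficient, illustrating how a physically plausible constraint can be active early without biasing the final model.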

    Multiscale-Spectral GFEM and optimal oversampling

    In this work we address the Multiscale Spectral Generalized Finite Element Method (MS-GFEM) developed in Babuška and Lipton (2011). We outline the numerical implementation of this method and present simulations that demonstrate contrast-independent exponential convergence of MS-GFEM solutions. We introduce strategies to reduce the computational cost of generating the optimal oversampled local approximating spaces used here. These strategies retain accuracy while reducing the computational work necessary to generate local bases. Motivated by oversampling, we develop a nearly optimal local basis based on a partition of unity on the boundary and the associated A-harmonic extensions.
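    For orientation, the standard GFEM ansatz underlying this construction can be written as follows (general form with assumed notation; see Babuška and Lipton (2011) for the precise local spectral spaces):

```latex
% Global approximation assembled from local spaces V_i on patches \omega_i
% through a partition of unity \{\varphi_i\} subordinate to the cover of \Omega.
u \;\approx\; u^{G} \;=\; \sum_{i=1}^{N} \varphi_i \, u_i ,
\qquad u_i \in V_i(\omega_i),
\qquad \sum_{i=1}^{N} \varphi_i \equiv 1 \ \text{on } \Omega,
\qquad \operatorname{supp}\varphi_i \subset \omega_i .
```

    In MS-GFEM the local spaces V_i are spanned by eigenfunctions of a local spectral problem on oversampled patches, which is what yields the exponential convergence reported above.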

    A Security Risk Taxonomy for Large Language Models

    As large language models (LLMs) permeate more and more applications, an assessment of their associated security risks becomes increasingly necessary. The potential for exploitation by malicious actors, ranging from disinformation to data breaches and reputation damage, is substantial. This paper addresses a gap in current research by focusing on the security risks posed by LLMs, which extend beyond the widely covered ethical and societal implications. Our work proposes a taxonomy of security risks along the user-model communication pipeline, explicitly focusing on prompt-based attacks on LLMs. We categorize the attacks by target and attack type within a prompt-based interaction scheme. The taxonomy is reinforced with specific attack examples to showcase the real-world impact of these risks. Through this taxonomy, we aim to inform the development of robust and secure LLM applications, enhancing their safety and trustworthiness.

    Optimistic planning for continuous-action deterministic systems

    We consider the optimal control of systems with deterministic dynamics, continuous, possibly large-scale state spaces, and continuous, low-dimensional action spaces. We describe an online planning algorithm called SOOP, which, like other algorithms in its class, has no direct dependence on the state-space structure. Unlike previous algorithms, SOOP explores the true solution space, consisting of infinite sequences of continuous actions, without requiring knowledge about the smoothness of the system. To this end, it borrows the principle of the simultaneous optimistic optimization method and develops a nontrivial adaptation of this principle to the planning problem. Experiments on four problems show that SOOP reliably ranks among the best algorithms, fully dominating competing methods when the problem requires both long horizons and fine discretization.
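    The principle SOOP borrows, simultaneous optimistic optimization (SOO), can be sketched on a one-dimensional maximization problem. The implementation below is a minimal illustration of the generic SOO idea, not the SOOP planner itself: the search space is hierarchically trisected, and at every depth the cell with the best evaluated centre is expanded, with no smoothness constant required.

```python
def soo_maximize(f, lo, hi, budget=300):
    # Each leaf is a tuple (depth, lo, hi, value of f at the cell centre).
    leaves = [(0, lo, hi, f((lo + hi) / 2))]
    evals = 1
    while evals < budget:
        vmax = float("-inf")
        max_depth = max(n[0] for n in leaves)
        for depth in range(max_depth + 1):
            cands = [n for n in leaves if n[0] == depth]
            if not cands:
                continue
            best = max(cands, key=lambda n: n[3])
            if best[3] <= vmax:
                continue  # not optimistic at this depth; skip it
            vmax = best[3]
            leaves.remove(best)
            d, a, b, _ = best
            third = (b - a) / 3
            for k in range(3):  # trisect the expanded cell
                ca, cb = a + k * third, a + (k + 1) * third
                leaves.append((d + 1, ca, cb, f((ca + cb) / 2)))
                evals += 1
    return max(leaves, key=lambda n: n[3])

# Hypothetical objective with its maximum at x = 0.7.
node = soo_maximize(lambda x: -(x - 0.7) ** 2, 0.0, 1.0)
x_best = (node[1] + node[2]) / 2
print(round(x_best, 3))
```

    SOOP's nontrivial contribution is adapting this cell-expansion principle to the planning setting, where the decision variable is an infinite sequence of continuous actions rather than a single point.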

    Learning Assembly Tasks in a Few Minutes by Combining Impedance Control and Residual Recurrent Reinforcement Learning

    Adapting to uncertainties is essential yet challenging for robots while conducting assembly tasks in real-world scenarios. Reinforcement learning (RL) methods provide a promising solution for these cases. However, training robots with RL can be a data-intensive, time-consuming, and potentially unsafe process. In contrast, classical control strategies can achieve near-optimal performance without training and be certifiably safe. However, this comes at the cost of assuming that the environment is known up to small uncertainties. Herein, an architecture is proposed that aims to get the best of both worlds by combining RL and classical strategies so that each one deals with the right portion of the assembly problem. A time-varying weighted sum combines a recurrent RL method with a nominal strategy, and the output serves as the reference for a task-space impedance controller. The proposed approach can learn to insert an object into a frame within a few minutes of real-world training. A success rate of 94% in the presence of considerable uncertainties is observed. Furthermore, the approach is robust to changes in the experimental setup and task, even when no retraining is performed. For example, the same policy achieves a success rate of 85% when the object properties change.
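    The combination structure can be sketched on a one-dimensional toy insertion task. Everything below is an assumed stand-in for illustration: the weighting schedule, gains, and the "policies" are hypothetical, and a simple proportional rule replaces the recurrent RL policy. What the sketch shows is the architecture: a time-varying weight shifts authority from the nominal strategy to the learned component, and the blended output is tracked by an impedance-style control law.

```python
def blend_weight(t, t_ramp=1.0):
    # Nominal strategy dominates early; the learned component takes over.
    return min(1.0, t / t_ramp)

def nominal_action(pos, goal):
    return 0.5 * (goal - pos)           # classical, model-based move

def learned_action(pos, goal):
    return 0.1 * (goal - pos)           # hypothetical stand-in for the RL policy

def impedance_force(ref, pos, vel, k=100.0, d=20.0):
    return k * (ref - pos) - d * vel    # task-space impedance law

pos, vel, goal, dt = 0.0, 0.0, 0.05, 0.001  # unit-mass point "robot"
for step in range(5000):
    t = step * dt
    w = blend_weight(t)
    # Time-varying weighted sum of the two strategies sets the reference.
    ref = pos + (1 - w) * nominal_action(pos, goal) + w * learned_action(pos, goal)
    f = impedance_force(ref, pos, vel)
    vel += f * dt
    pos += vel * dt
print(round(pos, 3))
```

    The impedance layer is what keeps the interaction compliant and safe regardless of which strategy currently dominates the reference, which is the division of labor the abstract describes.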

    A fast Monte-Carlo method with a Reduced Basis of Control Variates applied to Uncertainty Propagation and Bayesian Estimation

    The Reduced-Basis Control-Variate Monte-Carlo method was introduced recently in [S. Boyaval and T. Lelièvre, CMS, 8 2010] as an improved Monte-Carlo method for the fast estimation of many parametrized expected values at many parameter values. We provide here a more complete analysis of the method, including precise error estimates and convergence results. We also numerically demonstrate that it can be useful in some parametrized frameworks in Uncertainty Quantification, in particular (i) the case where the parametrized expectation is a scalar output of the solution to a Partial Differential Equation (PDE) with stochastic coefficients (an Uncertainty Propagation problem), and (ii) the case where the parametrized expectation is the Bayesian estimator of a scalar output in a similar PDE context. In each case, a PDE has to be solved many times for many values of its coefficients, which is costly, so we also use a reduced basis of PDE solutions, as in [S. Boyaval, C. Le Bris, Nguyen C., Y. Maday and T. Patera, CMAME, 198 2009]. To our knowledge, this is the first such combination of Reduced-Basis ideas, here with a view to reducing as much as possible the computational cost of a simple approach to Uncertainty Quantification.
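    The generic control-variate mechanism underlying the method can be shown in a few lines. This toy does not build a reduced basis; it only illustrates the variance-reduction step: estimate E[f(X)] by correcting the plain Monte-Carlo mean with a cheap, correlated quantity h(X) whose expectation is known exactly. The choice of f and h here is hypothetical.

```python
import math
import random

random.seed(0)
N = 20000
xs = [random.random() for _ in range(N)]  # samples of X ~ U(0, 1)

f = lambda x: math.exp(x)   # target: E[e^X] = e - 1 on U(0, 1)
h = lambda x: x             # control variate with known mean E[h(X)] = 1/2

mean_f = sum(map(f, xs)) / N
mean_h = sum(map(h, xs)) / N
# Optimal coefficient beta = Cov(f, h) / Var(h), estimated from the sample.
cov = sum((f(x) - mean_f) * (h(x) - mean_h) for x in xs) / N
var = sum((h(x) - mean_h) ** 2 for x in xs) / N
beta = cov / var
# Corrected estimator: unbiased, with variance reduced by the correlation.
estimate = mean_f - beta * (mean_h - 0.5)
print(round(estimate, 4))
```

    The paper's contribution is, roughly, to construct such a correlated surrogate from a reduced basis of previously computed solutions, so that one cheap control variate serves many parameter values at once.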

    Bayesian inferences of the thermal properties of a wall using temperature and heat flux measurements

    The assessment of the thermal properties of walls is essential for accurate building energy simulations that are needed to make effective energy-saving policies. These properties are usually investigated through in-situ measurements of temperature and heat flux over extended time periods. The one-dimensional heat equation with unknown Dirichlet boundary conditions is used to model the heat transfer process through the wall. In [F. Ruggeri, Z. Sawlan, M. Scavino, R. Tempone, A hierarchical Bayesian setting for an inverse problem in linear parabolic PDEs with noisy boundary conditions, Bayesian Analysis 12 (2) (2017) 407-433], the uncertainty in the thermal diffusivity parameter was assessed using different synthetic data sets. In this work, we adapt this methodology to an experimental study conducted in an environmental chamber, with measurements recorded every minute from temperature probes and heat flux sensors placed on both sides of a solid brick wall over a five-day period. The observed time series are locally averaged, according to a smoothing procedure determined by the solution of a criterion-function optimization problem, to fit the required set of noise-model assumptions. After this preprocessing, we can reasonably assume that the temperature and heat flux measurements have stationary Gaussian noise, and we can avoid working with full covariance matrices. The results show that our technique reduces the bias error of the estimated parameters when compared to other approaches. Finally, we compute the information gain under two experimental setups to recommend how the user can efficiently determine the duration of the measurement campaign and the range of the external temperature oscillation.
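    The local-averaging step can be sketched as follows. The window length here is chosen ad hoc, whereas the paper selects the smoothing via a criterion-function optimization; the synthetic per-minute temperature series and noise level are assumptions for illustration.

```python
import math
import random

random.seed(1)
# Hypothetical per-minute wall-surface temperature over 4 hours:
# a slow oscillation plus Gaussian sensor noise.
minutes = list(range(240))
true_temp = [20 + 2 * math.sin(2 * math.pi * t / 240) for t in minutes]
noisy = [x + random.gauss(0, 0.5) for x in true_temp]

def local_average(series, window):
    # Centred moving average, shrinking the window near the edges.
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

smoothed = local_average(noisy, window=15)
rmse_raw = math.sqrt(sum((a - b) ** 2 for a, b in zip(noisy, true_temp)) / 240)
rmse_smooth = math.sqrt(sum((a - b) ** 2 for a, b in zip(smoothed, true_temp)) / 240)
print(round(rmse_raw, 2), round(rmse_smooth, 2))
```

    Because the averaged residuals are closer to stationary Gaussian noise, the downstream Bayesian inference can use a diagonal noise model instead of full covariance matrices, as the abstract notes.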

    Fuzzy modeling for control
